31 research outputs found

    Quantifying the time course of visual object processing using ERPs: it's time to up the game

    Hundreds of studies have investigated the early ERPs to faces and objects using scalp and intracranial recordings. The vast majority of these studies have used uncontrolled stimuli, inappropriate designs, peak measurements, poor figures, and poor inferential and descriptive group statistics. These problems, together with a tendency to discuss any effect with p < 0.05 rather than to report effect sizes, have led to a research field very much qualitative in nature, despite its quantitative aspirations, and in which predictions do not go beyond condition A > condition B. Here we describe the main limitations of face and object ERP research and suggest alternative strategies to move forward. These problems plague intracranial and surface ERP studies, but also studies using more advanced techniques – e.g., source space analyses and measurements of network dynamics – as well as many behavioral, fMRI, TMS, and LFP studies. In essence, it is time to stop amassing binary results and start using single-trial analyses to build models of visual perception.

    Robust correlation analyses: false positive and power validation using a new open source Matlab toolbox

    Pearson’s correlation measures the strength of the association between two variables. The technique is, however, restricted to linear associations and is overly sensitive to outliers. Indeed, a single outlier can result in a highly inaccurate summary of the data. Yet it remains the most commonly used measure of association in psychology research. Here we describe a free Matlab(R)-based toolbox (http://sourceforge.net/projects/robustcorrtool/) that computes robust measures of association between two or more random variables: the percentage-bend correlation and skipped correlations. After illustrating how to use the toolbox, we show that robust methods, in which outliers are down-weighted or removed and accounted for in significance testing, provide better estimates of the true association, with accurate false-positive control and without loss of power. The different correlation methods were tested with normal data and with normal data contaminated by marginal or bivariate outliers. We report estimates of effect size, false-positive rate, and power, and advise on which technique to use depending on the data at hand.
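The toolbox itself is written in Matlab; as a language-neutral illustration of one of the two estimators it implements, here is a minimal Python sketch of the percentage-bend correlation (following Wilcox's estimator). The function name and the default bending constant `beta=0.2` are our own choices, not part of the toolbox's API.

```python
import math
import statistics

def percentage_bend_cor(x, y, beta=0.2):
    """Percentage-bend correlation; beta is the bending constant."""
    n = len(x)

    def bend(v):
        med = statistics.median(v)
        w = sorted(abs(vi - med) for vi in v)
        # omega: the m-th smallest absolute deviation, m = floor((1 - beta) * n)
        omega = w[int(math.floor((1 - beta) * n)) - 1]
        i1 = sum(1 for vi in v if (vi - med) / omega < -1)
        i2 = sum(1 for vi in v if (vi - med) / omega > 1)
        s = sum(vi for vi in v if abs((vi - med) / omega) <= 1)
        phi = (omega * (i2 - i1) + s) / (n - i1 - i2)
        # psi clips standardized scores to [-1, 1], bounding outlier influence
        return [max(-1.0, min(1.0, (vi - phi) / omega)) for vi in v]

    a, b = bend(x), bend(y)
    num = sum(ai * bi for ai, bi in zip(a, b))
    den = math.sqrt(sum(ai * ai for ai in a) * sum(bi * bi for bi in b))
    return num / den
```

Because extreme scores are clipped rather than entered at face value, a single wild observation cannot dominate the estimate the way it does with Pearson's r.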

    LIMO EEG: A Toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels, averaged across trials. More recently, tools have been developed to investigate single-trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB-compatible) to analyse evoked responses over all space and time dimensions, while accounting for single-trial variability using simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, thereby offering a new and complementary tool for the analysis of neural evoked responses.
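The hierarchical idea can be sketched in a few lines: at the first level, each subject's single-trial amplitudes are regressed on a trial-wise predictor; at the second level, the resulting betas are tested across subjects. This Python sketch (the toolbox itself is Matlab) shows the simplest possible case: one predictor per subject and a one-sample t statistic at the group level.

```python
import math

def first_level_slope(x, y):
    # First level: per-subject OLS fit of single-trial amplitudes (y)
    # on a trial-wise predictor (x); returns the beta (slope) estimate.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def second_level_t(betas):
    # Second level: one-sample t statistic testing whether the mean
    # per-subject beta differs from zero across subjects.
    n = len(betas)
    m = sum(betas) / n
    var = sum((b - m) ** 2 for b in betas) / (n - 1)
    return m / math.sqrt(var / n)
```

In the real toolbox the first level is a full design matrix per electrode and time point, and the second level uses robust statistics; the two-stage structure, however, is exactly this.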

    Modeling single-trial ERP reveals modulation of bottom-up face visual processing by top-down task constraints (in some subjects)

    We studied how task constraints modulate the relationship between single-trial event-related potentials (ERPs) and image noise. Thirteen subjects performed two interleaved tasks: on different blocks, they saw the same stimuli, but they discriminated either between two faces or between two colors. Stimuli were two pictures of red or green faces that contained from 10 to 80% of phase noise, with 10% increments. Behavioral accuracy followed a noise-dependent sigmoid in the identity task but was high and independent of noise level in the color task. EEG data recorded concurrently were analyzed using a single-trial ANCOVA: we assessed how changes in task constraints modulated ERP noise sensitivity while regressing out the main ERP differences due to identity, color, and task. Single-trial ERP sensitivity to image phase noise started at about 95–110 ms post-stimulus onset. Group analyses showed a significant reduction in noise sensitivity in the color task compared to the identity task from about 140 ms to 300 ms post-stimulus onset. However, statistical analyses in every subject revealed different results: significant task modulation occurred in 8/13 subjects, one showing an increase and seven showing a decrease in noise sensitivity in the color task. Onsets and durations of effects also differed between group and single-trial analyses: at any time point only a maximum of four subjects (31%) showed results consistent with group analyses. We provide detailed results for all 13 subjects, including a shift function analysis that revealed asymmetric task modulations of single-trial ERP distributions. We conclude that, during face processing, bottom-up sensitivity to phase noise can be modulated by top-down task constraints, in a broad window around the P2, at least in some subjects.
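A shift function compares two distributions decile by decile, asking how much one distribution must be shifted at each quantile to match the other; asymmetric differences across deciles are what the analysis above detects. A minimal Python sketch follows; note that the paper's analysis uses the Harrell-Davis quantile estimator, whereas this illustration uses a simpler linear-interpolation quantile.

```python
def quantile(xs, q):
    # Linear-interpolation sample quantile (simpler than the
    # Harrell-Davis estimator used in the actual analysis).
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + frac * (s[hi] - s[lo])

def shift_function(a, b):
    # Decile differences a_q - b_q: how far distribution b must be
    # shifted at each decile to match distribution a.
    deciles = [i / 10 for i in range(1, 10)]
    return [(q, quantile(a, q) - quantile(b, q)) for q in deciles]
```

For a pure location shift all nine differences are equal; task effects that compress or stretch only one tail of the single-trial distribution show up as unequal differences across deciles.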

    Time course and robustness of ERP object and face differences

    Conflicting results have been reported about the earliest “true” ERP differences related to face processing, with the bulk of the literature focusing on the signal in the first 200 ms after stimulus onset. Part of the discrepancy might be explained by uncontrolled low-level differences between images used to assess the timing of face processing. In the present experiment, we used a set of faces, houses, and noise textures with identical amplitude spectra to equate energy in each spatial frequency band. The timing of face processing was evaluated using face–house and face–noise contrasts, as well as upright-inverted stimulus contrasts. ERP differences were evaluated systematically at all electrodes, across subjects, and in each subject individually, using trimmed means and bootstrap tests. Different strategies were employed to assess the robustness of ERP differential activities in individual subjects and group comparisons. We report results showing that the most conspicuous and reliable effects were systematically observed in the N170 latency range, starting at about 130–150 ms after stimulus onset.
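The combination of trimmed means and bootstrap tests mentioned above can be sketched as follows: a 20% trimmed mean discards the most extreme trials before averaging, and a percentile bootstrap resamples both conditions to build a confidence interval for the difference. This is a generic Python sketch of that strategy, not the authors' actual analysis code; the function names and defaults are our own.

```python
import random

def trimmed_mean(xs, prop=0.2):
    # 20% trimmed mean: drop the g smallest and g largest values.
    s = sorted(xs)
    g = int(prop * len(s))
    return sum(s[g:len(s) - g]) / (len(s) - 2 * g)

def bootstrap_diff_ci(a, b, n_boot=2000, prop=0.2, alpha=0.05, seed=1):
    # Percentile-bootstrap confidence interval for the difference
    # between the trimmed means of two conditions.
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]  # resample condition A with replacement
        rb = [rng.choice(b) for _ in b]  # resample condition B with replacement
        diffs.append(trimmed_mean(ra, prop) - trimmed_mean(rb, prop))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

A difference is declared reliable when the interval excludes zero; because the trimmed mean ignores extreme trials, the test is far less sensitive to occasional artifactual sweeps than a mean-based t-test.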

    Contribution of Color Information in Visual Saliency Model for Videos

    Much research has been concerned with the contribution of the low-level features of a visual scene to the deployment of visual attention. Bottom-up saliency models have been developed to predict the location of gaze according to these features. So far, color, besides brightness, contrast, and motion, has been considered one of the primary features in computing bottom-up saliency. However, its contribution to guiding eye movements when viewing natural scenes has been debated. We investigated the contribution of color information in a bottom-up visual saliency model. The model efficiency was tested using experimental data obtained from 45 observers who were eye-tracked while freely exploring a large data set of color and grayscale videos. The two datasets of recorded eye positions, for grayscale and color videos, were compared with a luminance-based saliency model. We then incorporated chrominance information into the model. Results show that color information improves the performance of the saliency model in predicting eye positions.
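Two building blocks of such an evaluation can be sketched compactly: fusing normalized luminance and chrominance conspicuity maps into one saliency map, and scoring a map against recorded eye positions. The scoring metric below is Normalized Scanpath Saliency (NSS), a standard metric in this literature; the abstract does not name the exact metric used, so treat this as an illustrative choice, and the equal-weight fusion (`w=0.5`) as an arbitrary default.

```python
import math

def nss(saliency, fixated_indices):
    # Normalized Scanpath Saliency: z-score the saliency map, then
    # average the z-scores at fixated locations (higher = better).
    n = len(saliency)
    m = sum(saliency) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in saliency) / n)
    return sum((saliency[i] - m) / sd for i in fixated_indices) / len(fixated_indices)

def fuse_maps(lum, chrom, w=0.5):
    # Combine luminance and chrominance conspicuity maps after
    # min-max normalization; w weights the chrominance channel.
    def norm(v):
        lo, hi = min(v), max(v)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]
    return [(1 - w) * a + w * b for a, b in zip(norm(lum), norm(chrom))]
```

The study's comparison then amounts to asking whether `nss(fuse_maps(lum, chrom), fixations)` exceeds `nss(lum_only_map, fixations)` on average over videos.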

    How do amplitude spectra influence rapid animal detection?

    Amplitude spectra might provide information for natural scene classification. Amplitude does play a role in animal detection, because accuracy suffers when amplitude is normalized. However, this effect could be due to an interaction between phase and amplitude, rather than to a loss of amplitude-only information. We used an amplitude-swapping paradigm to establish that animal detection is partly based on an interaction between phase and amplitude. A difference in false alarms for two subsets of our distractor stimuli suggests that the classification of scene environment (man-made versus natural) may also be based on an interaction between phase and amplitude. Examples of interaction between amplitude and phase are discussed.
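The amplitude-swapping manipulation itself is simple in Fourier terms: build a hybrid stimulus from the phase spectrum of one image and the amplitude spectrum of another. A minimal 1-D Python sketch (using an explicit DFT for self-containment; real stimuli would use a 2-D FFT) makes the operation concrete:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def swap_amplitude(a, b):
    # Hybrid signal with the PHASE spectrum of `a` and the AMPLITUDE
    # spectrum of `b`. For real inputs, conjugate symmetry is preserved,
    # so the inverse transform is real up to rounding error.
    A, B = dft(a), dft(b)
    C = [abs(Bk) * cmath.exp(1j * cmath.phase(Ak)) for Ak, Bk in zip(A, B)]
    return [c.real for c in idft(C)]
```

If observers still detect the animal in `swap_amplitude(animal, distractor)`, detection is carried by phase; drops in accuracy relative to the original image index the phase-amplitude interaction discussed above.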

    Early ERPs to faces and objects are driven by phase, not amplitude spectrum information: evidence from parametric, test-retest, single-subject analyses

    One major challenge in determining how the brain categorizes objects is to tease apart the contribution of low-level and high-level visual properties to behavioral and brain imaging data. So far, studies using stimuli with equated amplitude spectra have shown that the visual system relies mostly on localized information, such as edges and contours, carried by phase information. However, some researchers have argued that some event-related potential (ERP) and blood-oxygen-level-dependent (BOLD) categorical differences could be driven by nonlocalized information contained in the amplitude spectrum. The goal of this study was to provide the first systematic quantification of the contribution of phase and amplitude spectra to early ERPs to faces and objects. We conducted two experiments in which we recorded electroencephalograms (EEG) from eight subjects, in two sessions each. In the first experiment, participants viewed images of faces and houses containing original or scrambled phase spectra combined with original, averaged, or swapped amplitude spectra. In the second experiment, we parametrically manipulated image phase and amplitude in 10% intervals. We performed a range of analyses including detailed single-subject general linear modeling of ERP data, test-retest reliability, and unique variance analyses. Our results suggest that early ERPs to faces and objects are due to phase information, with almost no contribution from the amplitude spectrum. Importantly, our results should not be used to justify uncontrolled stimuli; on the contrary, they emphasize the need for stimulus control (including the amplitude spectrum), parametric designs, and systematic data analyses, of which we have seen far too little in ERP vision research.